
    Finding fault: causality and counterfactuals in group attributions.

    Attributions of responsibility play a critical role in many group interactions. This paper explores the role of causal and counterfactual reasoning in blame attributions in groups. We develop a general framework that builds on the notion of pivotality: an agent is pivotal if she could have changed the group outcome by acting differently. In three experiments we test successive refinements of this notion: whether an agent is pivotal in close possible situations, and the number of paths by which she could achieve pivotality. In order to discriminate between potential models, we introduced group tasks with asymmetric structures. Some group members were complements (for the two to contribute to the group outcome it was necessary that both succeed) whereas others were substitutes (for the two to contribute to the group outcome it was sufficient that one succeeds). Across all three experiments we found that people's attributions were sensitive to the number of paths to pivotality. In particular, an agent incurred more blame for a team loss in the presence of a successful complementary peer than in the presence of a successful substitute.
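The pivotality notion above lends itself to a small illustration. The sketch below (hypothetical agent structure and function names, not taken from the paper) treats the group outcome as a Boolean function in which two agents are complements (both must succeed) and a third is a substitute route, and counts the states of the other agents under which a focal agent would be pivotal:

```python
from itertools import product

def group_outcome(a, b, c):
    """Illustrative team structure: A and B are complements (both must
    succeed for their route to count), and C is a substitute route
    (either route suffices for a team win)."""
    return (a and b) or c

def is_pivotal(outcome_fn, actual, agent):
    """An agent is pivotal if flipping only her contribution
    changes the group outcome."""
    flipped = list(actual)
    flipped[agent] = not flipped[agent]
    return outcome_fn(*actual) != outcome_fn(*flipped)

def paths_to_pivotality(outcome_fn, actual, agent, n_agents=3):
    """Count the alternative states of the OTHER agents under which the
    focal agent would be pivotal -- a crude proxy for the paper's
    'number of paths to pivotality'."""
    others = [i for i in range(n_agents) if i != agent]
    count = 0
    for vals in product([False, True], repeat=len(others)):
        state = list(actual)
        for i, v in zip(others, vals):
            state[i] = v
        if is_pivotal(outcome_fn, state, agent):
            count += 1
    return count
```

With A succeeding and B and C failing, only B and C are pivotal to the loss, and the substitute C has more paths to pivotality than the complement A, matching the structural asymmetry the experiments exploit.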

    Causal Responsibility and Counterfactuals.

    How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
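The pivotality component has a well-known formalisation: Chockler and Halpern (2004) grade responsibility as 1/(k+1), where k is the minimal number of changes to the other agents needed to make the focal agent pivotal. A minimal sketch under that reading (the voting example and function names are illustrative, not from the article):

```python
from itertools import combinations

def majority(votes):
    """Group outcome: motion passes on a strict majority of 1s."""
    return sum(votes) > len(votes) / 2

def degree_of_responsibility(outcome_fn, votes, agent):
    """Chockler & Halpern (2004) style measure: 1 / (k + 1), where k is
    the minimum number of OTHER agents whose choices must change before
    the focal agent becomes pivotal (returns 0.0 if no such k exists)."""
    n = len(votes)
    others = [i for i in range(n) if i != agent]
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            state = list(votes)
            for i in subset:
                state[i] = 1 - state[i]      # flip k other agents
            flipped = list(state)
            flipped[agent] = 1 - flipped[agent]  # then flip the focal agent
            if outcome_fn(state) != outcome_fn(flipped):
                return 1 / (k + 1)
    return 0.0
```

In a 3-0 vote each voter is over-determined (k = 1, responsibility 1/2), while in a 2-1 vote each majority voter is fully pivotal (k = 0, responsibility 1). The article's model additionally weights this by ex-ante criticality, which is not captured in this sketch.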

    Challenging the role of implicit processes in probabilistic category learning

    Considerable interest in the hypothesis that different cognitive tasks recruit qualitatively distinct processing systems has led to the proposal of separate explicit (declarative) and implicit (procedural) systems. A popular probabilistic category learning task known as the weather prediction task is said to be ideally suited to examine this distinction because its two versions, "observation" and "feedback," are claimed to recruit the declarative and procedural systems, respectively. In two experiments, we found results that were inconsistent with this interpretation. In Experiment 1, a concurrent memory task had a detrimental effect on the implicit (feedback) version of the task. In Experiment 2, participants displayed comparable and accurate insight into the task and their judgment processes in the feedback and observation versions. These findings have important implications for the study of probabilistic category learning in both normal and patient populations.
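For readers unfamiliar with the paradigm, a weather-prediction-style trial can be sketched as follows. The cue validities and the simple averaging scheme are illustrative assumptions, not the task's published parameters:

```python
import random

# Illustrative cue validities: P(rain | cue present), one per cue card.
# The classic task uses four binary cues whose combinations predict the
# outcome only probabilistically, so no strategy is ever perfectly accurate.
CUE_WEIGHTS = [0.8, 0.6, 0.4, 0.2]

def generate_trial(rng):
    """Sample a cue pattern (at least one cue present) and a
    probabilistic rain/sun outcome."""
    while True:
        cues = [rng.random() < 0.5 for _ in range(4)]
        if any(cues):
            break
    present = [w for w, c in zip(CUE_WEIGHTS, cues) if c]
    p_rain = sum(present) / len(present)   # simple averaging (an assumption)
    outcome = rng.random() < p_rain        # True = rain, False = sun
    return cues, outcome
```

In the feedback version participants predict the outcome and are then told it; in the observation version they simply see cue patterns paired with outcomes, which is the contrast the two experiments probe.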

    Beyond Covariation: Cues to Causal Structure

    Causal induction has two components: learning about the structure of causal models and learning about causal strength and other quantitative parameters. This chapter argues for several interconnected theses. First, people represent causal knowledge qualitatively, in terms of causal structure; quantitative knowledge is derivative. Second, people use a variety of cues to infer causal structure aside from statistical data (e.g. temporal order, intervention, coherence with prior knowledge). Third, once a structural model is hypothesized, subsequent statistical data are used to confirm, refute, or elaborate the model. Fourth, people are limited in the number and complexity of causal models that they can hold in mind to test, but they can separately learn and then integrate simple models, and revise models by adding and removing single links. Finally, current computational models of learning need further development before they can be applied to human learning.

    Time reordered: Causal perception guides the interpretation of temporal order

    We present a novel temporal illusion in which the perceived order of events is dictated by their perceived causal relationship. Participants view a simple Michotte-style launching sequence featuring 3 objects, in which one object starts moving before its presumed cause. Not only did participants re-order the events in a causally consistent way, thus violating the objective temporal order, but they also failed to recognise the clip they had seen, preferring a clip in which temporal and causal order matched. We show that the effect is not due to lack of attention to the presented events, and we discuss the problem of determining whether causality affects temporal order at an early perceptual stage or whether it distorts an accurately perceived order during retrieval. Alternatively, we propose a mechanism by which temporal order is neither misperceived nor misremembered but inferred "on-demand" given phenomenal causality and the temporal priority principle, the assumption that causes precede their effects. Finally, we discuss how, contrary to theories of causal perception, impressions of causality can be generated from dynamic sequences with strong spatiotemporal deviations.

    Models of probabilistic category learning in Parkinson's disease: Strategy use and the effects of L-dopa

    Probabilistic category learning (PCL) has become an increasingly popular paradigm to study the brain bases of learning and memory. It has been argued that PCL relies on procedural habit learning, which is impaired in Parkinson's disease (PD). However, as PD patients were typically tested under medication, it is possible that levodopa (L-dopa) caused impaired performance in PCL. We present formal models of rule-based strategy switching in PCL, to re-analyse the data from [Jahanshahi, M., Wilkinson, L., Gahir, H., Dharminda, A., & Lagnado, D. A. (2009). Medication impairs probabilistic classification learning in Parkinson's disease. Manuscript submitted for publication] comparing PD patients on and off medication (within subjects) to matched controls. Our analysis shows that PD patients followed a similar strategy switch process as controls when off medication, but not when on medication. On medication, PD patients mainly followed a random guessing strategy, with only a few switching to the better Single Cue strategies. PD patients off medication and controls made more use of the optimal Multi-Cue strategy. In addition, while controls and PD patients off medication only switched to strategies which did not decrease performance, strategy switches of PD patients on medication were not always directed as such. Finally, results indicated that PD patients on medication responded according to a probability matching strategy indicative of associative learning, while the behaviour of PD patients off medication and controls was consistent with a rule-based hypothesis testing procedure.
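The strategy classes discussed above (random guessing, Single Cue, Multi-Cue, and probability matching) can be sketched as simple response rules; the functions and parameters below are illustrative, not the paper's formal models:

```python
import random

def single_cue_strategy(cues, cue_index):
    """Single Cue rule: predict 'rain' whenever one focal cue is present."""
    return cues[cue_index]

def multi_cue_strategy(cues, weights):
    """Multi-Cue rule: combine signed evidence from all present cues and
    always pick the more likely outcome (a maximizing, rule-based strategy)."""
    evidence = sum(w for w, c in zip(weights, cues) if c)
    return evidence > 0

def probability_matching(p_rain, rng):
    """Probability matching: respond 'rain' with the outcome's probability
    rather than always choosing the more likely outcome -- the response
    pattern the paper links to associative learning."""
    return rng.random() < p_rain
```

The re-analysis asks which of these rules best describes each participant's responses over trials, and how participants move between rules.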

    Staying afloat on Neurath's boat - Heuristics for sequential causal learning

    Causal models are key to flexible and efficient exploitation of the environment. However, learning causal structure is hard, with massive spaces of possible models, hard-to-compute marginals and the need to integrate diverse evidence over many instances. We report on two experiments in which participants learnt about probabilistic causal systems involving three and four variables from sequences of interventions. Participants were broadly successful, albeit exhibiting sequential dependence and floundering under high background noise. We capture their behavior with a simple model, based on the "Neurath's ship" metaphor for scientific progress, that neither maintains a probability distribution, nor computes exact likelihoods.
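The "Neurath's ship" idea, holding a single hypothesis and making local repairs rather than tracking a posterior over all graphs, can be sketched as one-step local search. The representation (a frozenset of directed edges) and the acceptance rule are assumptions for illustration, not the paper's fitted model:

```python
import itertools
import random

def neighbors(edges, nodes):
    """All graphs reachable by adding or removing a single directed edge."""
    result = []
    for a, b in itertools.permutations(nodes, 2):
        edited = set(edges)
        edited.symmetric_difference_update({(a, b)})  # toggle this edge
        result.append(frozenset(edited))
    return result

def neuraths_ship_step(edges, nodes, score, rng):
    """Hold one hypothesis; consider a single random local edit and keep
    it only if it scores at least as well on the evidence seen so far.
    No distribution over all graphs is ever maintained."""
    candidate = rng.choice(neighbors(edges, nodes))
    return candidate if score(candidate) >= score(edges) else edges
```

Plugging in a likelihood-like `score` over the interventions observed so far yields a learner that is cheap per trial but sequentially dependent, since each edit is anchored to the current hypothesis.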

    A causal framework for integrating learning and reasoning

    Can the phenomena of associative learning be replaced wholesale by a propositional reasoning system? Mitchell et al. make a strong case against an automatic, unconscious, and encapsulated associative system. However, their propositional account fails to distinguish inferences based on actions from those based on observation. Causal Bayes networks remedy this shortcoming, and also provide an overarching framework for both learning and reasoning. On this account, causal representations are primary, but associative learning processes are not excluded a priori.
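The action/observation distinction that causal Bayes networks capture can be made concrete with a two-node network C → E: conditioning on an observed effect is evidence about its cause, whereas intervening to set the effect cuts the incoming arrow and carries no such evidence. A minimal numeric sketch (the parameter values are arbitrary):

```python
# Two-node causal Bayes net: C -> E.
P_C = 0.5             # prior probability of the cause
P_E_GIVEN_C = 0.9     # effect is likely given the cause
P_E_GIVEN_NOT_C = 0.1 # and unlikely otherwise

def p_cause_given_observed_effect():
    """Bayes' rule: seeing the effect is evidence FOR the cause."""
    p_e = P_E_GIVEN_C * P_C + P_E_GIVEN_NOT_C * (1 - P_C)
    return P_E_GIVEN_C * P_C / p_e

def p_cause_given_do_effect():
    """Under the intervention do(E=1) the arrow into E is cut, so E
    carries no information about C: P(C | do(E=1)) = P(C)."""
    return P_C
```

With these numbers, observing E raises belief in C from 0.5 to 0.9, while producing E by intervention leaves it at 0.5, which is exactly the distinction a purely propositional account leaves implicit.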

    Concreteness and abstraction in everyday explanation

    A number of philosophers argue for the value of abstraction in explanation. According to these prescriptive theories, an explanation becomes superior when it leaves out details that make no difference to the occurrence of the event one is trying to explain (the explanandum). Abstract explanations are not frugal placeholders for improved, detailed future explanations but are more valuable than their concrete counterparts because they highlight the factors that do the causal work, the factors in the absence of which the explanandum would not occur. We present several experiments that test whether people follow this prescription (i.e., whether people prefer explanations with abstract difference makers over explanations with concrete details and explanations that omit descriptively accurate but causally irrelevant information). Contrary to the prescription, we found a preference for concreteness and detail. Participants rated explanations with concrete details higher than their abstract counterparts, and in many cases they did not penalize the presence of causally irrelevant details. Nevertheless, causality still constrained participants' preferences: they downgraded concrete explanations that did not communicate the critical causal properties.

    A counterfactual simulation model of causal judgments for physical events

    How do people make causal judgments about physical events? We introduce the counterfactual simulation model (CSM) which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations. The CSM postulates different aspects of causation that capture the extent to which a cause made a difference to whether and how the outcome occurred, and whether the cause was sufficient and robust. We test the CSM in several experiments in which participants make causal judgments about dynamic collision events. A preliminary study establishes a very close quantitative mapping between causal and counterfactual judgments. Experiment 1 demonstrates that counterfactuals are necessary for explaining causal judgments. Participants' judgments differed dramatically between pairs of situations in which what actually happened was identical, but where what would have happened differed. Experiment 2 features multiple candidate causes and shows that participants' judgments are sensitive to different aspects of causation. The CSM provides a better fit to participants' judgments than a heuristic model which uses features based on what actually happened. We discuss how the CSM can be used to model the semantics of different causal verbs, how it captures related concepts such as physical support, and how its predictions extend beyond the physical domain.
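The CSM's whether-cause judgment can be sketched as noisy counterfactual rollouts: remove the candidate cause, re-simulate, and grade causality by how often the outcome would have differed. The setup below (ball B heading toward a gate, Gaussian trajectory noise, with ball A having actually knocked B through) is an illustrative abstraction, not the paper's physics engine:

```python
import random

def would_have_gone_through(b_heading, noise_sd, rng):
    """One noisy counterfactual rollout: had ball A been removed, would
    ball B still have passed through the gate? The gate spans [-1, 1]
    and b_heading is where B was heading on its own (hypothetical setup)."""
    return abs(b_heading + rng.gauss(0, noise_sd)) < 1.0

def causal_judgment(b_heading, noise_sd=0.3, n=10_000, seed=0):
    """CSM-style whether-cause judgment for 'A caused B to go through':
    the probability that the outcome would have been DIFFERENT without A.
    Since A actually knocked B in, 'different' means B would have missed."""
    rng = random.Random(seed)
    misses = sum(not would_have_gone_through(b_heading, noise_sd, rng)
                 for _ in range(n))
    return misses / n
```

If B was heading through the gate anyway (`b_heading = 0.0`) the judgment is near 0, and if B would have sailed wide (`b_heading = 2.0`) it is near 1, mirroring the model's claim that judgments track counterfactual difference-making rather than only what actually happened.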